
    Neural models of language use: Studies of language comprehension and production in context

    Artificial neural network models of language are mostly known and appreciated today for providing a backbone for formidable AI technologies. This thesis takes a different perspective. Through a series of studies on language comprehension and production, it investigates whether artificial neural networks—beyond being useful in countless AI applications—can serve as accurate computational simulations of human language use, and thus as a new core methodology for the language sciences.

    Is Information Density Uniform in Task-Oriented Dialogues?

    Acknowledgements We would like to thank Jaap Jumelet for a helpful discussion on neural language models, the anonymous EMNLP-2021 reviewers for their valuable comments, as well as the anonymous ACL-2021 reviewers for feedback that led to a considerable improvement of the first version of this paper. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 819455).

    Construction Repetition Reduces Information Rate in Dialogue

    Speakers repeat constructions frequently in dialogue. Due to their peculiar information-theoretic properties, repetitions can be thought of as a strategy for cost-effective communication. In this study, we focus on the repetition of lexicalised constructions—i.e., recurring multi-word units—in English open-domain spoken dialogues. We hypothesise that speakers use construction repetition to mitigate information rate, leading to an overall decrease in utterance information content over the course of a dialogue. We conduct a quantitative analysis, measuring the information content of constructions and that of their containing utterances, estimating information content with an adaptive neural language model. We observe that construction usage lowers the information content of utterances. This facilitating effect (i) increases throughout dialogues, (ii) is boosted by repetition, (iii) grows as a function of repetition frequency and density, and (iv) is stronger for repetitions of referential constructions.
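
    The per-utterance information measure described in the abstract can be sketched with an off-the-shelf language model. The snippet below is a minimal illustration, not the authors' code: it uses a static GPT-2 model from the Hugging Face transformers library rather than the adaptive neural language model the study relies on, and the helper name utterance_information is purely illustrative.

        # Minimal sketch (assumption: a static GPT-2 stands in for the
        # adaptive neural language model used in the study).
        import math

        import torch
        from transformers import GPT2LMHeadModel, GPT2TokenizerFast

        tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
        model = GPT2LMHeadModel.from_pretrained("gpt2")
        model.eval()

        def utterance_information(context: str, utterance: str) -> float:
            """Mean per-token surprisal of `utterance` (in bits), given `context`."""
            context_ids = tokenizer.encode(context)
            utterance_ids = tokenizer.encode(utterance)
            input_ids = torch.tensor([context_ids + utterance_ids])

            with torch.no_grad():
                logits = model(input_ids).logits          # (1, seq_len, vocab_size)
            log_probs = torch.log_softmax(logits[0], dim=-1)

            # Surprisal of each utterance token given all preceding tokens;
            # the prediction for position p comes from the logits at p - 1.
            surprisals = []
            for i, token_id in enumerate(utterance_ids):
                position = len(context_ids) + i
                log_p = log_probs[position - 1, token_id]
                surprisals.append(-log_p.item() / math.log(2))   # nats -> bits
            return sum(surprisals) / len(surprisals)

        # Example: information content of an utterance repeating an earlier construction.
        print(utterance_information("I went to the shop.", " I went to the shop again."))

    Under this formulation, lower mean surprisal corresponds to lower information content, so an utterance containing a construction the model finds predictable (e.g., one repeated from earlier in the dialogue) will receive a lower value.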

    AnaLog: Testing Analytical and Deductive Logic Learnability in Language Models

    Acknowledgements We would like to thank the anonymous ARR and *SEM 2022 reviewers for their feedback and suggestions, as well as Ece Takmaz for her comments. Samuel Ryb and Arabella Sinclair worked on this project while affiliated with the University of Amsterdam. The project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 819455). The dataset is available at https://github.com/dmg-illc/analog